Casper CBC for ETH2.0
Deprecated!!!
Proposal 1
We need to change the current beacon chain spec as below to enforce (bitwise) LMD GHOST.
Fork choice
now: "favor the highest justified checkpoint then use LMD GHOST"
new: "just use LMD GHOST".
Parameter
LATEST_BLOCK_ROOTS_LENGTH
now: 2**13 (= 8,192) (slots) ~13 hours
new: 2**22 (= 4,194,304) (slots) ~ 1 year
Beacon state
now: validator_balances list → new: validator_volatile_data list:
ValidatorVolatileData = {balance: uint64, last_virtual_agreement_height: uint64, last_virtual_at_height: uint64}
add off_chain_block_hashes = List[Tuple[Hash32, uint64]]
(a hash of a block header, a virtual agreement height)
entries are removed once their slot is < current slot - 2**22 (i.e. after ~1 year).
※ "off chain block" = "uncles and distant cousins"
※ latest_block_roots = List[bytes32] also used.
Block validity
Its parent needs to be the result of executing the LMD GHOST fork choice rule given all the evidence the block knows about.
Sort the last_virtual_agreement_height values and the last_virtual_at_height values of all validators
Verify that there exists no height H such that the total deposit size of validators whose last_virtual_agreement_height is > H is less than half of (the total deposit size of validators whose last_virtual_agreement_height is >= H) - (the total deposit size of validators whose last_virtual_at_height is H)
nrryuya.icon > Here, bitwise LMD GHOST comes in.
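The height check above can be sketched as follows. This is a hypothetical illustration: the tuple layout (balance, last_virtual_agreement_height, last_virtual_at_height) is an assumption standing in for the spec's ValidatorVolatileData, not a spec encoding.

```python
from typing import List, Tuple

def passes_height_check(validators: List[Tuple[int, int, int]]) -> bool:
    # validators: (balance, last_virtual_agreement_height, last_virtual_at_height)
    heights = sorted({vah for _, vah, _ in validators} |
                     {vat for _, _, vat in validators})
    for h in heights:
        above = sum(b for b, vah, _ in validators if vah > h)         # [vah > h]
        at_or_above = sum(b for b, vah, _ in validators if vah >= h)  # [vah >= h]
        at_h = sum(b for b, _, vat in validators if vat == h)         # [vath == h]
        # the block is invalid if any height H violates the bound
        if above < (at_or_above - at_h) / 2:
            return False
    return True
```

A naive scan over all candidate heights as above is O(n^2); Proposal 2's partial-sum arrays exist precisely to make this check cheap.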
Method
Add submit_offchain_block_header(BlockHeader)
BlockHeader: equivalent to a block except replacing all non-top-level items with their hashes
checks
the parent is already included in either latest_block_roots or off_chain_block_hashes
the hash of the header itself is not in either.
then, adds to off_chain_block_hashes
if the parent is in off_chain_block_hashes
"virtual agreement height": the same as for the parent,
if the parent is the value in latest_block_roots (corresponding to slot N)
"virtual agreement height": N * 256 + the number of most-significant-bits by which "the hash of the off-chain block" and "the hash of the value in latest_block_roots corresponding to slot N+1" agree
e.g.
if the hashes start with 0x35 and 0x27 then they agree on three bits
if they start with 0x4c8 and 0x4cf then they agree on nine bits.
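The bits-of-agreement computation can be sketched like this; it assumes hashes are compared as byte strings, most-significant bit first (the 0x4c8 / 0x4cf example below is left-aligned into whole bytes):

```python
def msb_agreement_bits(h1: bytes, h2: bytes) -> int:
    """Number of leading bits on which two equal-length hashes agree."""
    bits = 0
    for b1, b2 in zip(h1, h2):
        diff = b1 ^ b2
        if diff == 0:
            bits += 8           # whole byte agrees
        else:
            bits += 8 - diff.bit_length()  # leading zero bits of the XOR
            break
    return bits

def virtual_agreement_height(n: int, offchain_hash: bytes,
                             canonical_hash: bytes) -> int:
    # N * 256 + bits of agreement with the latest_block_roots entry for slot N+1
    return n * 256 + msb_agreement_bits(offchain_hash, canonical_hash)
```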
Attestation
An attestation can only be included if the block it points to is in either latest_block_roots or off_chain_block_hashes.
then, update validator_volatile_data
last_virtual_at_height: 256 times the slot number of the message the attestation is signing
last_virtual_agreement_height:
if the block is in latest_block_roots: last_virtual_at_height
if the block is in off_chain_block_hashes: the virtual agreement height of the off-chain block header
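The update above can be sketched as follows; the dict-based record and argument names are assumptions standing in for ValidatorVolatileData, not spec identifiers.

```python
def update_volatile_data(record: dict, attested_slot: int,
                         in_latest_block_roots: bool,
                         offchain_agreement_height: int = 0) -> dict:
    # last_virtual_at_height: 256 times the slot of the attested message
    record["last_virtual_at_height"] = 256 * attested_slot
    if in_latest_block_roots:
        # on-chain block: full agreement up to the attested height
        record["last_virtual_agreement_height"] = record["last_virtual_at_height"]
    else:
        # block found in off_chain_block_hashes: reuse its stored height
        record["last_virtual_agreement_height"] = offchain_agreement_height
    return record
```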
Proposal 2
Beacon state
add two arrays:
balance_agreeing_upto: List[List[int]]
balance_at: List[List[int]]
initialized as [[] for _ in range(LMD_GHOST_LOOKBACK)]
helper functions
code: Python
def two_d_array_get(array, x, y):
    return array[x][y] if y < len(array[x]) else 0

def two_d_array_set(array, x, y, value):
    if value != 0 and y >= len(array[x]):
        array[x] += [0] * (y - len(array[x]) + 1)
    if y < len(array[x]):
        array[x][y] = value
    while len(array[x]) > 0 and array[x][-1] == 0:
        array[x].pop()
Attestation Processing
generate a list: List[Tuple[int, int, int, int, int]]
tuple:
validator index
previous_virtual_agreement_height
new_virtual_agreement_height
previous_virtual_at_height
new_virtual_at_height
then transform that list into a set of lists:
[(agreement_height, delta_balance), ...]
[(at_height, delta_balance)].
For each of these items, run this
code: Python
slot = (agreement_height // 256) % LMD_GHOST_LOOKBACK
bits = agreement_height % 256
two_d_array_set(balance_agreeing_upto, slot, bits, two_d_array_get(balance_agreeing_upto, slot, bits) + delta_balance)
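The preceding step, collapsing the per-validator tuples into the two delta lists, can be sketched like this; get_balance is a stand-in for the validator-balance lookup, not a spec function.

```python
from collections import defaultdict

def height_deltas(entries, get_balance):
    # entries: (validator_index, prev_agreement, new_agreement, prev_at, new_at)
    agree, at = defaultdict(int), defaultdict(int)
    for idx, prev_ag, new_ag, prev_at_h, new_at_h in entries:
        b = get_balance(idx)
        agree[prev_ag] -= b   # remove the stake from the old height...
        agree[new_ag] += b    # ...and add it at the new one
        at[prev_at_h] -= b
        at[new_at_h] += b
    # drop zero deltas so each list stays small
    return (sorted((h, d) for h, d in agree.items() if d != 0),
            sorted((h, d) for h, d in at.items() if d != 0))
```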
Note: everything so far is O(1) per attestation with no cryptography, though making L updates to the Merkle tree will require recomputing L * log(LMD_GHOST_LOOKBACK / L) hashes.
To handle updating balances, we do a simple trick for optimization:
we store in the validator record a whole-integer value that represents the validator's balance for LMD GHOST purposes, adjusting it downwards to the next lower integer only when the balance drops below xx.0, and upwards to the next higher integer only when it rises above xx.5.
Only when doing this adjustment do we also perform an adjustment to agreement_height and at_height to reflect that validator's new balance.
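A sketch of the hysteresis rule, with balances in whole-ETH units for readability (the spec would use Gwei integers):

```python
def adjust_ghost_balance(ghost_balance: int, true_balance: float) -> int:
    if true_balance < ghost_balance:
        return ghost_balance - 1      # dropped below xx.0: step down
    if true_balance > ghost_balance + 0.5:
        return ghost_balance + 1      # rose above xx.5: step up
    return ghost_balance              # inside the dead band: no change
```

The dead band between xx.0 and xx.5 means a validator whose balance hovers near a boundary does not trigger an agreement_height / at_height rewrite every slot.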
Block validity
run the same check as above, except we directly use right-facing partial sums of a flattened version of agreement_height rotated by block.slot % LMD_GHOST_LOOKBACK.
For example, suppose LMD_GHOST_LOOKBACK = 4 and block.slot % LMD_GHOST_LOOKBACK = 1 and agreement_height = [[10, 20], [30, 80, 50], [], [40]].
Then, agreement_height rotated would be [[30, 80, 50], [], [40], [10, 20]], the flattened version is [30, 80, 50, 40, 10, 20], and the partial sums are [230, 200, 120, 70, 30, 20], which may be invalid unless there is at least 10 of at-height deposit at the same position as the 40.
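The rotation / flatten / right-facing partial-sum procedure can be reproduced directly:

```python
def right_partial_sums(agreement_height, rotation):
    # rotate by block.slot % LMD_GHOST_LOOKBACK
    rotated = agreement_height[rotation:] + agreement_height[:rotation]
    # flatten the 2-D array of balances
    flat = [x for row in rotated for x in row]
    # right-facing partial sums: sums[i] = sum(flat[i:])
    sums, total = [], 0
    for x in reversed(flat):
        total += x
        sums.append(total)
    return sums[::-1]
```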
Slashing condition
1. A validator signs two different blocks in the same epoch
2. A validator signs a block with a slot number of C, in which their last_virtual_at_height is A, and they also sign a block with a slot number B, with A < B < C
For 1, we can use the same code as the current FFG.
The below is about 2.
SlashingProof object
code: Python
{
    index: uint64,  # index of the validator in the validator set
    data1: SlashableAttestation,
    data2: SlashableAttestation,
    merkle_branch_bottom: ValidatorVolatileData,
    merkle_branch_in_volatile_data_tree: hash32,
    block_header_1: BlockHeader,
    block_header_2: BlockHeader,
}
Proof validity
Both SlashableAttestation objects pass a verify_slashable_vote_data check
index is part of intersection(union(data1.custody_bit_0_indices, data1.custody_bit_1_indices), union(data2.custody_bit_0_indices, data2.custody_bit_1_indices))
The root calculated by starting from merkle_branch_bottom and applying merkle_branch_in_volatile_data_tree using index as an index is the validator_volatile_data root in the block_header_1
hash(block_header_1) == data1.data and hash(block_header_2) == data2.data
block_header_1.slot // EPOCH_LENGTH > block_header_2.slot // EPOCH_LENGTH > merkle_branch_bottom.last_virtual_at_height // (EPOCH_LENGTH * 256)
Note that this is almost but not quite sufficient. The reason is that an attacker could make a signature on the main chain at height H1, then sign on a fake off-chain block that only has a header at height H2, include that signature in the main chain and sign at height H3, then keep signing on the main chain, and then sign a message on another chain at a height between H2 and H3. The fact that the Merkle branch for height H2 is absent means that there is no way to catch and penalize the validator. We can solve this in two ways:
1. Add an additional challenge-response game where anyone can require any validator to publish the Merkle root for their own index for any signature that they participated in
2. Require clients to actually verify the off-chain blocks before they accept any chain that references them, and make this a validity rule for the chain
(1) could be extended into a general "proof of custody of beacon chain state" mechanism, which may also be useful for other reasons.
Dynamic validator sets
Overview
We will follow the pattern of "change the validator set by a maximum of a few percent per day, so it keeps rotating regardless of finality"
This way we don't need to verify finality on-chain and we can avoid on-chain thresholds.
We would change the validator set every N slots (think: 1 day) by a maximum of 1/64.
BeaconState
We can do this by moving the existing withdrawal queue mechanism to rate-limit exits rather than withdrawals (so we get first-come-first-served prioritization for exits, rather than the current lowest-index-first-served,
which is okay at present because we're allowing up to 1/64 of all validators to exit every epoch, but would be less okay if we make exiting actually slow).
We add an additional identical queue, activation_queue, that affects validator activations.
We thus have both a validator_registry_exit_count and a validator_registry_activation_count in the state, and we simply reuse the exit_count field in the ValidatorRecord for both.
Forkchoice
Modify the forkchoice to
Execute the fork choice rule with the validator set that's active at slot N and verify that the result is the block in the chain at slot 2N-1
Then execute the fork choice rule with the validator set that's active at 2N and verify that the result is the block in the chain at slot 3N-1
And so on.
Validity condition
We can enforce the following rule. Suppose the current block slot is in [k*N ... (k+1)*N-1]:
Conditional on the block in the chain at slot k*N being correct, the current block's parent is the winner of the LMD GHOST fork choice using the most recent validator set
Conditional on the block in the chain at slot (k-1)*N being correct, the winner of the LMD GHOST fork choice either is the block in the chain at slot k*N, or could be modified to be by changing 1/64 of the validators' messages
Conditional on the block in the chain at slot (k-2)*N being correct, the winner of the LMD GHOST fork choice either is the block in the chain at slot (k-1)*N, or could be modified to be by changing 2/64 of the validators' messages
And so on.
This would be implemented in practice by relaxing [vah > h] >= ([vah >= h] - [vath == h]) / 2 from above to [vah > h] >= ([vah >= h] - [vath == h]) * (32 - i) / 64
[vah > h] means "the total deposit size of validators with virtual agreement height more than h"
vath means "virtual at height"
i is the number of validator rotations back that we are checking
nrryuya.icon > (32 - i) / 64 = 1/2 - i/64
The i/64 slack is there to represent the possibility that even if the fork choice looks incorrect from the point of view of present validators, it could have been correct from the point of view of validators at the time;
the intention is that if a block is valid under the LMD GHOST fork choice, then it passes this check, though there is the risk that a few blocks that pass this check are not valid under LMD GHOST fork choice.
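Under the same assumed (balance, vah, vath) tuple layout as the earlier sketch, the relaxed per-rotation check looks like this, where i counts validator rotations back (i = 0 recovers the original factor of 1/2):

```python
from typing import List, Tuple

def passes_relaxed_check(validators: List[Tuple[int, int, int]], i: int) -> bool:
    heights = sorted({vah for _, vah, _ in validators} |
                     {vat for _, _, vat in validators})
    for h in heights:
        above = sum(b for b, vah, _ in validators if vah > h)
        at_or_above = sum(b for b, vah, _ in validators if vah >= h)
        at_h = sum(b for b, _, vat in validators if vat == h)
        # relaxed bound: factor (32 - i) / 64 = 1/2 - i/64
        if above < (at_or_above - at_h) * (32 - i) / 64:
            return False
    return True
```

As i grows, the i/64 slack makes the check strictly easier to pass, matching the intuition that older rotations could legitimately look less decisive to present validators.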
Fault tolerance
In any case, safety should only decrease by ~1/64, plus an additional 1/64 per validator rotation during which the client has not yet seen a block finalized.
Resources
efficient slashing condition
if a validator signs a message at slot C of a block in which their most recent message was at slot A, and they also signed another message at slot B, with A < B < C, they get penalized
How to enable BLS aggregation
maintain in the state a Merkle tree of every validator’s most recent message